The usage of deep neural networks in safety-critical systems is limited by our ability to guarantee their correct behavior. Runtime monitors are components that aim to identify unsafe predictions and discard them before they can lead to catastrophic consequences. Several recent works on runtime monitoring have focused on out-of-distribution (OOD) detection, i.e., identifying inputs that differ from the training data. In this work, we argue that OOD detection is not a well-suited framework for designing efficient runtime monitors and that it is more relevant to evaluate monitors based on their ability to discard incorrect predictions. We call this setting out-of-model-scope detection and discuss its conceptual differences with OOD. We also conduct extensive experiments on popular datasets from the literature to show that studying monitors in the OOD setting can be misleading: (1) very good OOD results can give a false impression of safety, and (2) comparison under the OOD setting does not allow identifying the best monitor to detect errors. Finally, we show that removing erroneous training data samples helps to train better monitors.
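To make the distinction concrete, the following minimal sketch scores one and the same monitor under both settings; the variable names and the random data are hypothetical illustrations, not taken from the paper:

```python
# Hypothetical sketch contrasting the OOD and out-of-model-scope settings.
# All names (monitor_score, is_ood, y_true, y_pred) and data are illustrative.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 1000
monitor_score = rng.random(n)             # higher = monitor flags input as unsafe
is_ood = rng.integers(0, 2, n)            # OOD ground truth (in/out of distribution)
y_true = rng.integers(0, 10, n)           # task labels
y_pred = rng.integers(0, 10, n)           # model predictions

# OOD setting: does the monitor separate in- from out-of-distribution inputs?
auroc_ood = roc_auc_score(is_ood, monitor_score)

# Out-of-model-scope setting: does it separate correct from incorrect predictions?
is_error = (y_true != y_pred).astype(int)
auroc_oms = roc_auc_score(is_error, monitor_score)

print(f"OOD AUROC: {auroc_ood:.3f} | error-detection AUROC: {auroc_oms:.3f}")
```

The two scores can diverge sharply in practice, which is exactly the misleading effect the abstract warns about.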
With the increasing use of machine learning (ML) in critical autonomous systems, runtime monitors have been developed to detect prediction errors and keep the system in a safe state during operation. Monitors have been proposed for different applications involving diverse perception tasks and ML models, with specific evaluation procedures and metrics used in different contexts. This paper introduces three unified safety-oriented metrics representing the safety benefit of the monitor (safety gain), the remaining safety gap after its use (residual hazard), and its negative impact on system performance (availability cost). Computing these metrics requires defining two return functions that represent how a given ML prediction affects expected future rewards and hazards. Three use cases (classification, drone landing, and autonomous driving) are used to demonstrate how metrics from the literature can be expressed in terms of the proposed metrics. Experimental results on these examples show how different evaluation choices affect the perceived performance of a monitor. Since our formalism requires making explicit safety assumptions, it allows us to ensure that the evaluation conducted matches high-level system requirements.
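The abstract does not give the formal definitions, but one plausible reading of the three metrics, assuming reward and hazard returns evaluated with no monitor, with the monitor under test, and with a perfect monitor, is sketched below; every name here is hypothetical and the paper's actual definitions may differ:

```python
# Hypothetical sketch: one way the three safety-oriented metrics could be derived
# from two return functions (reward and hazard). Illustrative only.

def safety_gain(hazard_no_monitor: float, hazard_with_monitor: float) -> float:
    # Safety benefit: hazard avoided by deploying the monitor.
    return hazard_no_monitor - hazard_with_monitor

def residual_hazard(hazard_with_monitor: float, hazard_perfect_monitor: float) -> float:
    # Remaining safety gap: hazard still incurred relative to an ideal monitor.
    return hazard_with_monitor - hazard_perfect_monitor

def availability_cost(reward_no_monitor: float, reward_with_monitor: float) -> float:
    # Performance penalty: expected reward lost because the monitor rejects predictions.
    return reward_no_monitor - reward_with_monitor

print(safety_gain(0.30, 0.08), residual_hazard(0.08, 0.02), availability_cost(0.95, 0.80))
```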
This paper presents the OPUS ecosystem with a focus on the development of open machine translation models and tools, and their integration into end-user applications, development platforms, and professional workflows. We discuss our ongoing mission of increasing language coverage and translation quality, and also describe ongoing work on the development of modular translation models and speed-optimized compact solutions for real-time translation on regular desktops and small devices.
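For context, OPUS-MT models are distributed as ready-to-use checkpoints. A minimal usage sketch via the Hugging Face transformers API, assuming the publicly released Helsinki-NLP/opus-mt-en-de checkpoint and a transformers installation (with sentencepiece), might look like:

```python
# Minimal sketch: loading and running an OPUS-MT model through transformers.
# Assumes the Helsinki-NLP/opus-mt-en-de checkpoint; other language pairs follow
# the same naming pattern.
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["OPUS models cover hundreds of language pairs."],
                  return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```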
A recent popular approach to out-of-distribution (OOD) detection is based on a self-supervised learning technique referred to as contrastive learning. There are two main variants of contrastive learning, namely instance and class discrimination, targeting features that can discriminate between different instances for the former, and different classes for the latter. In this paper, we aim to understand the effectiveness and limitations of existing contrastive learning methods for OOD detection. We approach this in three ways. First, we systematically study the performance difference between the instance discrimination and supervised contrastive learning variants in different OOD detection settings. Second, we study which in-distribution (ID) classes OOD data tend to be classified into. Finally, we study the spectral decay property of the different contrastive learning approaches and examine how it correlates with OOD detection performance. In scenarios where the ID and OOD datasets are sufficiently different from one another, we see that instance discrimination, in the absence of fine-tuning, is competitive with supervised approaches in OOD detection. We see that OOD samples tend to be classified into classes that have a distribution similar to the distribution of the entire dataset. Furthermore, we show that contrastive learning learns a feature space whose singular vectors span several high-variance directions, which can be detrimental or beneficial to OOD detection depending on the inference approach used.
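The spectral decay analysis mentioned above can be reproduced in outline: take the matrix of learned features, compute its singular values, and inspect how quickly they decay. A minimal sketch with synthetic stand-in features (all shapes and names are hypothetical):

```python
# Minimal sketch of a spectral-decay analysis of a learned feature space.
# The features here are synthetic stand-ins for penultimate-layer embeddings.
import numpy as np

rng = np.random.default_rng(0)
features = rng.normal(size=(5000, 512))   # (num_samples, feature_dim)

# Center the features, then inspect the singular value spectrum.
centered = features - features.mean(axis=0, keepdims=True)
singular_values = np.linalg.svd(centered, compute_uv=False)

# Normalized spectrum: slow decay means variance is spread over many directions.
spectrum = singular_values / singular_values.sum()
print("top-10 share of spectrum:", spectrum[:10].sum())
```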
There is an increasing need in our society to achieve faster advances in science to tackle urgent problems such as climate change, environmental hazards, sustainable energy systems, and pandemics. In certain domains, like chemistry, scientific discovery carries the extra burden of assessing the risks of proposed novel solutions before moving to the experimental stage. Despite several recent advances in machine learning and AI to address some of these challenges, there is still a gap in technologies to support end-to-end discovery applications that integrate the myriad of available technologies into a coherent, orchestrated, yet flexible discovery process. Such applications need to handle complex knowledge management at scale, enabling knowledge consumption and production in a timely and efficient way for subject matter experts (SMEs). Furthermore, the discovery of novel functional materials strongly relies on the development of exploration strategies in the chemical space. For instance, generative models have gained attention within the scientific community due to their ability to generate enormous volumes of novel molecules across material domains. These models exhibit extreme creativity that often translates into low viability of the generated candidates. In this work, we propose a workbench framework that aims at enabling human-AI co-creation to reduce the time until the first discovery and the opportunity costs involved. This framework relies on a knowledge base with domain and process knowledge, and on user-interaction components to acquire knowledge and advise the SMEs. Currently, the framework supports four main activities: generative modeling, dataset triage, molecule adjudication, and risk assessment.
With climate change predicted to increase the likelihood of landslide events, there is a growing need for rapid landslide detection technologies that help inform emergency responses. Synthetic Aperture Radar (SAR) is a remote sensing technique that can provide measurements of affected areas independent of weather or lighting conditions. The use of SAR, however, is hindered by the domain knowledge needed for the pre-processing steps, and its interpretation requires expert knowledge. We provide simplified, pre-processed, machine-learning-ready SAR datacubes for four globally located landslide events, obtained from several Sentinel-1 satellite passes before and after a landslide-triggering event, together with segmentation maps of the landslides. Using the Hokkaido, Japan datacube from this dataset, we study the feasibility of SAR-based landslide detection with supervised deep learning (DL). Our results demonstrate that DL models can be used to detect landslides from SAR data, achieving an area under the precision-recall curve exceeding 0.7. We find that additional satellite visits enhance detection performance, but that early detection is possible when SAR data is combined with terrain information from a digital elevation model. This can be especially useful for time-critical emergency interventions. Code is made publicly available at https://github.com/iprapas/landslide-sar-unet.
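The reported metric, area under the precision-recall curve, is straightforward to compute for pixel-wise segmentation outputs; a minimal sketch with synthetic predictions (shapes and names are hypothetical, not taken from the linked repository):

```python
# Minimal sketch: area under the precision-recall curve for a pixel-wise
# landslide segmentation map. Data is synthetic; names are illustrative.
import numpy as np
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
ground_truth = rng.integers(0, 2, size=(128, 128))   # 1 = landslide pixel
probabilities = rng.random(size=(128, 128))          # model's per-pixel scores

# Flatten both maps and score; average precision approximates the AUPRC.
auprc = average_precision_score(ground_truth.ravel(), probabilities.ravel())
print(f"AUPRC: {auprc:.3f}")
```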
The goal of autonomous vehicles is to navigate public roads safely and comfortably. To enforce safety, traditional planning approaches rely on handcrafted rules to generate trajectories. Machine-learning-based systems, on the other hand, scale with data and are able to learn more complex behaviors. However, they often ignore the fact that trajectory distributions, for both other road agents and the self-driving vehicle itself, can be leveraged to improve safety. In this paper, we propose modeling a distribution over multiple future trajectories for both the self-driving vehicle and other road agents, using a unified neural network architecture for prediction and planning. During inference, we select the planning trajectory that minimizes a cost taking into account safety and the predicted probabilities. Our approach does not depend on any rule-based planners for trajectory generation or optimization, improves with more training data, and is simple to implement. We extensively evaluate our method through a realistic simulator and show that the predicted trajectory distribution corresponds to different driving profiles. We also successfully deploy it on a self-driving vehicle on urban public roads, confirming that it drives safely without compromising comfort. The code for training and testing our model on a public prediction dataset and the video of the road test are available at https://woven.mobi/safepathnet
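The inference-time selection step described above, choosing among candidate ego trajectories by a cost that mixes a safety term with the predicted probabilities, can be sketched as follows; the cost weighting and the distance-based safety check are hypothetical simplifications, not the paper's exact formulation:

```python
# Hypothetical sketch of probability-aware trajectory selection at inference time.
# ego_candidates: (K, T, 2) candidate ego trajectories with probabilities (K,)
# agent_trajs:    (A, T, 2) predicted trajectories of other road agents
import numpy as np

def select_trajectory(ego_candidates, ego_probs, agent_trajs,
                      safety_radius=2.0, safety_weight=10.0):
    # Safety term: time steps where any agent comes within safety_radius.
    # diffs broadcasts to (K, A, T, 2); dists is (K, A, T).
    diffs = ego_candidates[:, None, :, :] - agent_trajs[None, :, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    near_misses = (dists < safety_radius).sum(axis=(1, 2))

    # Total cost: penalize unsafe candidates, prefer high-probability ones.
    cost = safety_weight * near_misses - np.log(ego_probs + 1e-9)
    return int(np.argmin(cost))

rng = np.random.default_rng(0)
ego = rng.normal(size=(5, 20, 2))
agents = rng.normal(size=(3, 20, 2))
probs = np.array([0.4, 0.3, 0.15, 0.1, 0.05])
print("selected candidate:", select_trajectory(ego, probs, agents))
```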
Crop management, including nitrogen (N) fertilization and irrigation management, has a significant impact on crop yield, economic profit, and the environment. Although management guidelines exist, finding the optimal management practices for a specific planting environment and crop remains challenging. Previous work used reinforcement learning (RL) and crop simulators to address this problem, but the trained policies either achieve limited performance or are not deployable in the real world. In this paper, we present an intelligent crop management system that simultaneously optimizes N fertilization and irrigation through RL and imitation learning (IL), using crop simulations with the Decision Support System for Agrotechnology Transfer (DSSAT). We first use deep RL, in particular deep Q-networks, to train management policies that require all state information from the simulator as observations (denoted as full observation). We then employ IL to train management policies that only require a limited amount of state information that is easily obtainable in the real world (denoted as partial observation), by mimicking the actions of the previously trained RL policies under full observation. We conduct experiments in a case study with maize in Florida and compare the trained policies against a maize management guideline. Our policies trained under both full and partial observation achieve better outcomes, yielding higher profit or a similar profit with a smaller environmental impact. Moreover, the partial-observation management policies are directly deployable in the real world, since they use easily obtainable information.
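The imitation-learning step described above amounts to behavior cloning: a partial-observation policy is trained with supervised learning to reproduce the actions of the full-observation RL policy. A minimal sketch with synthetic data (the network, dimensions, and names are all hypothetical, not the paper's implementation):

```python
# Hypothetical behavior-cloning sketch: a partial-observation policy learns to
# imitate the actions of a full-observation RL policy. Synthetic data throughout.
import torch
import torch.nn as nn

rng = torch.Generator().manual_seed(0)
partial_obs = torch.randn(2048, 8, generator=rng)             # limited state features
expert_actions = torch.randint(0, 4, (2048,), generator=rng)  # teacher's actions

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 4))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(10):
    logits = policy(partial_obs)
    loss = loss_fn(logits, expert_actions)   # match the teacher's action choices
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print("final imitation loss:", loss.item())
```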
On-board robots navigating in complex, unstructured, and dynamic environments that rely on online event-based perception may suffer from unpredictable variations in the incoming event rate and its processing time, which can cause computational overflow or loss of responsiveness. This paper presents ASAP: a novel event handling framework that transmits events to the processing algorithm while maintaining system responsiveness and preventing overflows. ASAP consists of two adaptive mechanisms. The first prevents event processing overflows by discarding an adaptive percentage of the incoming events. The second mechanism dynamically adjusts the size of the event packages to reduce the delay between event generation and processing. ASAP has guaranteed convergence and is flexible with respect to the processing algorithm. It has been validated on board under challenging conditions.
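A rough sketch of the two adaptive mechanisms, an adjustable drop percentage and a variable package size driven by measured processing time, is given below; the update rules and constants are invented for illustration, not ASAP's actual control laws:

```python
# Illustrative sketch of ASAP-style adaptive event handling. The two mechanisms
# (adaptive event dropping and variable package sizing) follow the description
# above, but every update rule and constant here is a guess.
import random
import time

drop_rate, package_size = 0.0, 500       # both adapted online
TARGET_LATENCY = 0.01                    # assumed per-package time budget (s)

def process(events):
    # Stand-in for any event-based processing algorithm.
    time.sleep(1e-6 * len(events))

for _ in range(100):                     # simulated stream of event bursts
    incoming = [random.random() for _ in range(random.randint(100, 5000))]

    # Mechanism 1: discard an adaptive percentage of the incoming events.
    kept = [e for e in incoming if random.random() > drop_rate]

    # Mechanism 2: send events in packages of the current adaptive size.
    for i in range(0, len(kept), package_size):
        start = time.perf_counter()
        process(kept[i:i + package_size])
        elapsed = time.perf_counter() - start

        # Over budget: drop more, shrink packages. Under budget: relax both.
        if elapsed > TARGET_LATENCY:
            drop_rate = min(0.9, drop_rate + 0.05)
            package_size = max(50, int(package_size * 0.8))
        else:
            drop_rate = max(0.0, drop_rate - 0.01)
            package_size = min(5000, int(package_size * 1.1))

print(f"final drop_rate={drop_rate:.2f}, package_size={package_size}")
```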
Event cameras can capture pixel-level illumination changes with very high temporal resolution and dynamic range. They have attracted increasing research interest due to their robustness to lighting conditions and motion blur. Two main approaches exist in the literature for feeding event-based processing algorithms: packaging the triggered events into event packages, or sending them one by one as single events. These approaches are limited by either processing overflow or a lack of responsiveness. Processing overflow is caused by high event generation rates, when the algorithm cannot process all events in real time. Conversely, a lack of responsiveness occurs when the generation frequency of event packages is too low. This paper presents ASAP, an adaptive scheme that manages the event stream through variable-size packages that accommodate the event package processing time. Experimental results show that ASAP is able to feed an asynchronous event clustering algorithm in a responsive and efficient manner, while at the same time preventing overflow.